Search Results

Documents authored by Hirahara, Shuichi


Document
Regularization of Low Error PCPs and an Application to MCSP

Authors: Shuichi Hirahara and Dana Moshkovitz

Published in: LIPIcs, Volume 283, 34th International Symposium on Algorithms and Computation (ISAAC 2023)


Abstract
In a regular PCP, the verifier queries each proof symbol in the same number of tests. This number is called the degree of the proof, and it is at least 1/(sq), where s is the soundness error and q is the number of queries. Regularity and reduced degree are extremely useful properties of PCPs. There is an expander-based transformation by Papadimitriou and Yannakakis that transforms any PCP with a constant number of queries and constant soundness error into a regular PCP with constant degree. There are also transformations for low-error projection PCPs and unique PCPs, and other PCPs are constructed especially to be regular. In this work we show how to regularize and reduce the degree of PCPs with a possibly large number of queries and low soundness error. As an application, we prove NP-hardness of an unweighted variant of the collective minimum monotone satisfying assignment problem, which was introduced by Hirahara (FOCS'22) to prove NP-hardness of MCSP^* (the partial function variant of the Minimum Circuit Size Problem) under randomized reductions. We present a simplified proof and sufficient conditions under which MCSP^* is NP-hard under the standard notion of reduction: MCSP^* is NP-hard under deterministic polynomial-time many-one reductions if there exists a function in E that satisfies certain direct sum properties.
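As a quick aid to the definition above, note the simple double-counting identity behind the notion of degree; this is our own notational restatement, not a formula taken from the paper.

% A verifier that runs T tests, each making q queries into a proof of length n,
% queries symbol i in deg(i) tests; counting (test, query) pairs in two ways gives
\[
  \sum_{i=1}^{n} \deg(i) \;=\; q \cdot T,
  \qquad \text{and the PCP is \emph{$d$-regular} if } \deg(i) = d \text{ for every symbol } i.
\]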

Cite as

Shuichi Hirahara and Dana Moshkovitz. Regularization of Low Error PCPs and an Application to MCSP. In 34th International Symposium on Algorithms and Computation (ISAAC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 283, pp. 39:1-39:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.ISAAC.2023.39,
  author =	{Hirahara, Shuichi and Moshkovitz, Dana},
  title =	{{Regularization of Low Error PCPs and an Application to MCSP}},
  booktitle =	{34th International Symposium on Algorithms and Computation (ISAAC 2023)},
  pages =	{39:1--39:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-289-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{283},
  editor =	{Iwata, Satoru and Kakimura, Naonori},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2023.39},
  URN =		{urn:nbn:de:0030-drops-193411},
  doi =		{10.4230/LIPIcs.ISAAC.2023.39},
  annote =	{Keywords: PCP theorem, regularization, Minimum Circuit Size Problem}
}
Document
Bounded Relativization

Authors: Shuichi Hirahara, Zhenjian Lu, and Hanlin Ren

Published in: LIPIcs, Volume 264, 38th Computational Complexity Conference (CCC 2023)


Abstract
Relativization is one of the most fundamental concepts in complexity theory; it explains the difficulty of resolving major open problems. In this paper, we propose a weaker notion of relativization called bounded relativization. For a complexity class ℭ, we say that a statement is ℭ-relativizing if the statement holds relative to every oracle 𝒪 ∈ ℭ. It is easy to see that every result that relativizes also ℭ-relativizes for every complexity class ℭ. On the other hand, we observe that many non-relativizing results, such as IP = PSPACE, are in fact PSPACE-relativizing.

First, we use the idea of bounded relativization to obtain new lower bound results, including the following nearly maximum circuit lower bound: for every constant ε > 0, BPE^{MCSP}/2^{εn} ⊈ SIZE[2ⁿ/n]. We prove this by PSPACE-relativizing the recent pseudodeterministic pseudorandom generator by Lu, Oliveira, and Santhanam (STOC 2021).

Next, we study the limitations of PSPACE-relativizing proof techniques, and show that a seemingly minor improvement over the known results using PSPACE-relativizing techniques would imply a breakthrough separation NP ≠ L. For example:
- Impagliazzo and Wigderson (JCSS 2001) proved that if EXP ≠ BPP, then BPP admits infinitely-often subexponential-time heuristic derandomization. We show that their result is PSPACE-relativizing, and that improving it to worst-case derandomization using PSPACE-relativizing techniques implies NP ≠ L.
- Oliveira and Santhanam (STOC 2017) proved that every dense subset in P admits an infinitely-often subexponential-time pseudodeterministic construction, which we observe is PSPACE-relativizing. Improving this to almost-everywhere (pseudodeterministic) or (infinitely-often) deterministic constructions by PSPACE-relativizing techniques implies NP ≠ L.
- Santhanam (SICOMP 2009) proved that pr-MA does not have fixed polynomial-size circuits. This lower bound can be shown to be PSPACE-relativizing, and we show that improving it to an almost-everywhere lower bound using PSPACE-relativizing techniques implies NP ≠ L.
In fact, we show that if we can use PSPACE-relativizing techniques to obtain the above-mentioned improvements, then PSPACE ≠ EXPH. We obtain our barrier results by constructing suitable oracles computable in EXPH relative to which these improvements are impossible.
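In symbols, the central definition from the abstract reads as follows (restated in our notation):

\[
  \varphi \text{ is } \mathfrak{C}\text{-relativizing}
  \quad\Longleftrightarrow\quad
  \varphi^{\mathcal{O}} \text{ holds for every oracle } \mathcal{O} \in \mathfrak{C}.
\]
% Every fully relativizing statement is C-relativizing for every class C, while
% IP = PSPACE is an example of a non-relativizing yet PSPACE-relativizing result.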

Cite as

Shuichi Hirahara, Zhenjian Lu, and Hanlin Ren. Bounded Relativization. In 38th Computational Complexity Conference (CCC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 264, pp. 6:1-6:45, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.CCC.2023.6,
  author =	{Hirahara, Shuichi and Lu, Zhenjian and Ren, Hanlin},
  title =	{{Bounded Relativization}},
  booktitle =	{38th Computational Complexity Conference (CCC 2023)},
  pages =	{6:1--6:45},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-282-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{264},
  editor =	{Ta-Shma, Amnon},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.6},
  URN =		{urn:nbn:de:0030-drops-182764},
  doi =		{10.4230/LIPIcs.CCC.2023.6},
  annote =	{Keywords: relativization, circuit lower bound, derandomization, explicit construction, pseudodeterministic algorithms, interactive proofs}
}
Document
Kolmogorov Complexity Characterizes Statistical Zero Knowledge

Authors: Eric Allender, Shuichi Hirahara, and Harsha Tirumala

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
We show that a decidable promise problem has a non-interactive statistical zero-knowledge proof system if and only if it is randomly reducible via an honest polynomial-time reduction to a promise problem for Kolmogorov-random strings, with a superlogarithmic additive approximation term. This extends recent work by Saks and Santhanam (CCC 2022). We build on this to give new characterizations of SZK (Statistical Zero Knowledge), as well as the related classes NISZK_L and SZK_L.

Cite as

Eric Allender, Shuichi Hirahara, and Harsha Tirumala. Kolmogorov Complexity Characterizes Statistical Zero Knowledge. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 3:1-3:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{allender_et_al:LIPIcs.ITCS.2023.3,
  author =	{Allender, Eric and Hirahara, Shuichi and Tirumala, Harsha},
  title =	{{Kolmogorov Complexity Characterizes Statistical Zero Knowledge}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{3:1--3:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.3},
  URN =		{urn:nbn:de:0030-drops-175063},
  doi =		{10.4230/LIPIcs.ITCS.2023.3},
  annote =	{Keywords: Kolmogorov Complexity, Interactive Proofs}
}
Document
Learning Versus Pseudorandom Generators in Constant Parallel Time

Authors: Shuichi Hirahara and Mikito Nanashima

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
A polynomial-stretch pseudorandom generator (PPRG) in NC⁰ (i.e., constant parallel time) is one of the most important cryptographic primitives, especially for constructing highly efficient cryptography and indistinguishability obfuscation. The celebrated work (Applebaum, Ishai, and Kushilevitz, SIAM Journal on Computing, 2006) on randomized encodings yields the characterization of sublinear-stretch pseudorandom generators in NC⁰ by the existence of logspace-computable one-way functions, but characterizing PPRGs in NC⁰ seems out of reach at present. Therefore, it is natural to ask which sort of hardness notion is essential for constructing PPRGs in NC⁰. In particular, to the best of our knowledge, all the previously known candidates for PPRGs in NC⁰ follow only one framework based on Goldreich’s one-way function.

In this paper, we present a new learning-theoretic characterization for PPRGs in NC⁰ and related classes. Specifically, we consider the average-case hardness of learning for well-studied classes in parameterized settings, where the number of samples is restricted to fixed-parameter tractable (FPT), and show that the following are equivalent:
- The existence of (a collection of) PPRGs in NC⁰.
- The average-case hardness of learning sparse 𝔽₂-polynomials on a sparse example distribution and an NC⁰-samplable target distribution (i.e., a distribution on target functions).
- The average-case hardness of learning Fourier-sparse functions on a sparse example distribution and an NC⁰-samplable target distribution.
- The average-case hardness of learning constant-depth parity decision trees on a sparse example distribution and an NC⁰-samplable target distribution.
Furthermore, we characterize a (single) PPRG in parity-NC⁰ by the average-case hardness of learning constant-degree 𝔽₂-polynomials on a uniform example distribution with FPT samples. Based on our results, we propose new candidates for PPRGs in NC⁰ and related classes under a hardness assumption on a natural learning problem. An important property of PPRGs in NC⁰ constructed in our framework is that the output bits are computed by various predicates; thus, it seems to resist an attack that depends on a specific property of one fixed predicate.

Conceptually, the main contribution of this study is to formalize a theory of FPT dualization of concept classes, which yields a meta-theorem for the first result. For the second result on PPRGs in parity-NC⁰, we use a different technique of pseudorandom 𝔽₂-polynomials.
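The abstract notes that all previously known PPRG candidates in NC⁰ follow Goldreich's framework, in which each output bit applies one fixed constant-arity predicate to a small, publicly chosen subset of seed positions. The following is a minimal illustrative sketch of such a local generator; the predicate, arity, and stretch are arbitrary placeholders chosen by us, not the paper's construction.

import random

def sample_graph(n, m, arity, rng):
    """Public structure: for each of the m output bits, a fixed tuple of `arity` seed positions."""
    return [tuple(rng.sample(range(n), arity)) for _ in range(m)]

def local_generator(seed_bits, graph, predicate):
    """Each output bit applies the same fixed predicate to its listed seed positions;
    since the predicate has constant arity, every output bit is an NC^0 function of the seed."""
    return [predicate(tuple(seed_bits[j] for j in positions)) for positions in graph]

def example_predicate(bits):
    # A 5-ary predicate (XOR of three bits plus an AND of two), used purely as a placeholder.
    x1, x2, x3, x4, x5 = bits
    return (x1 ^ x2 ^ x3 ^ (x4 & x5)) & 1

if __name__ == "__main__":
    rng = random.Random(0)
    n = 64                                   # toy seed length
    m = 2 * n                                # a true PPRG would have m = n^{1 + Omega(1)}
    graph = sample_graph(n, m, arity=5, rng=rng)
    seed = [rng.randrange(2) for _ in range(n)]
    print("".join(map(str, local_generator(seed, graph, example_predicate))))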

Cite as

Shuichi Hirahara and Mikito Nanashima. Learning Versus Pseudorandom Generators in Constant Parallel Time. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 70:1-70:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.ITCS.2023.70,
  author =	{Hirahara, Shuichi and Nanashima, Mikito},
  title =	{{Learning Versus Pseudorandom Generators in Constant Parallel Time}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{70:1--70:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.70},
  URN =		{urn:nbn:de:0030-drops-175736},
  doi =		{10.4230/LIPIcs.ITCS.2023.70},
  annote =	{Keywords: Parallel cryptography, polynomial-stretch pseudorandom generators in NC⁰, PAC learning, average-case complexity, fixed-parameter tractability}
}
Document
Finding Errorless Pessiland in Error-Prone Heuristica

Authors: Shuichi Hirahara and Mikito Nanashima

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
Average-case complexity has two standard formulations, i.e., errorless complexity and error-prone complexity. In average-case complexity, a critical topic of research is to show the equivalence between these formulations, especially for the average-case complexity of NP. In this study, we present a relativization barrier for such an equivalence. Specifically, we construct an oracle relative to which NP is easy on average in the error-prone setting (i.e., DistNP ⊆ HeurP) but hard on average in the errorless setting even for 2^o(n/log n)-size circuits (i.e., DistNP ⊈ AvgSIZE[2^o(n/log n)]), which provides an answer to the open question posed by Impagliazzo (CCC 2011). Additionally, we show the following in the same relativized world:
- Lower bound of meta-complexity: GapMINKT^𝒪 ∉ prSIZE^𝒪[2^o(n/log n)] and GapMCSP^𝒪 ∉ prSIZE^𝒪[2^(n^ε)] for some ε > 0.
- Worst-case hardness of learning on uniform distributions: P/poly is not weakly PAC learnable with membership queries on the uniform distribution by nonuniform 2ⁿ/n^ω(1)-time algorithms.
- Average-case hardness of distribution-free learning: P/poly is not weakly PAC learnable on average by nonuniform 2^o(n/log n)-time algorithms.
- Weak cryptographic primitives: There exist a hitting set generator, an auxiliary-input one-way function, an auxiliary-input pseudorandom generator, and an auxiliary-input pseudorandom function against SIZE^𝒪[2^o(n/log n)].
This provides considerable insight into Pessiland (i.e., the world in which no one-way function exists, and NP is hard on average), such as the relativized separation of the error-prone average-case hardness of NP and auxiliary-input cryptography. At the core of our oracle construction is a new notion of random restriction with masks.
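For readers less familiar with the two formulations: roughly, an errorless heuristic may give up but never errs, whereas an error-prone heuristic always answers and may err on a small fraction of inputs. The schematic definitions below, for a language L, a distribution ensemble 𝒟 = {𝒟_n}, and a failure parameter δ, are an informal restatement of the standard conventions, not the paper's exact definitions.

\[
\begin{aligned}
  \text{errorless (cf.\ } \mathsf{AvgP}\text{):}\quad
    & A(x) \in \{L(x), \bot\} \text{ for every } x,
    \quad \Pr_{x \sim \mathcal{D}_n}\bigl[A(x) = \bot\bigr] \le \delta(n);\\
  \text{error-prone (cf.\ } \mathsf{HeurP}\text{):}\quad
    & A(x) \in \{0,1\} \text{ for every } x,
    \quad \Pr_{x \sim \mathcal{D}_n}\bigl[A(x) \ne L(x)\bigr] \le \delta(n).
\end{aligned}
\]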

Cite as

Shuichi Hirahara and Mikito Nanashima. Finding Errorless Pessiland in Error-Prone Heuristica. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 25:1-25:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.CCC.2022.25,
  author =	{Hirahara, Shuichi and Nanashima, Mikito},
  title =	{{Finding Errorless Pessiland in Error-Prone Heuristica}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{25:1--25:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.25},
  URN =		{urn:nbn:de:0030-drops-165875},
  doi =		{10.4230/LIPIcs.CCC.2022.25},
  annote =	{Keywords: average-case complexity, oracle separation, relativization barrier, meta-complexity, learning, auxiliary-input cryptography}
}
Document
Symmetry of Information from Meta-Complexity

Authors: Shuichi Hirahara

Published in: LIPIcs, Volume 234, 37th Computational Complexity Conference (CCC 2022)


Abstract
Symmetry of information for time-bounded Kolmogorov complexity is a hypothetical inequality that relates time-bounded Kolmogorov complexity and its conditional analogue. In 1992, Longpré and Watanabe showed that symmetry of information holds if NP is easy in the worst case, which has been the state of the art over the last three decades. In this paper, we significantly improve this result by showing that symmetry of information holds under the weaker assumption that NP is easy on average. In fact, our proof techniques are applicable to any resource-bounded Kolmogorov complexity and enable proving symmetry of information from an efficient algorithm that computes resource-bounded Kolmogorov complexity. We demonstrate the significance of our proof techniques by presenting two applications. First, using that symmetry of information does not hold for Levin’s Kt-complexity, we prove that randomized Kt-complexity cannot be computed in time 2^o(n) on inputs of length n, which improves the previous quasi-polynomial lower bound of Oliveira (ICALP 2019). Our proof implements Kolmogorov’s insightful approach to the P versus NP problem in the case of randomized Kt-complexity. Second, we consider the question of excluding Heuristica, i.e., a world in which NP is easy on average but NP ≠ P, from Impagliazzo’s five worlds: Using symmetry of information, we prove that Heuristica is excluded if the problem of approximating time-bounded conditional Kolmogorov complexity K^t(x∣y) up to some additive error is NP-hard for t ≫ |y|. We complement this result by proving NP-hardness of approximating sublinear-time-bounded conditional Kolmogorov complexity up to a multiplicative factor of |x|^{1/(log log |x|)^O(1)} for t ≪ |y|. Our NP-hardness proof presents a new connection between sublinear-time-bounded conditional Kolmogorov complexity and a secret sharing scheme.
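For orientation, the inequality in question has roughly the following shape, where K^t denotes t-time-bounded Kolmogorov complexity; the exact time bounds and additive terms vary by formulation, so this is an informal restatement rather than the paper's statement.

% Hypothetical "hard" direction of symmetry of information; the converse inequality
% (composing descriptions of x and of y given x into a description of the pair)
% holds unconditionally up to similar polynomial and logarithmic losses.
\[
  K^{\mathrm{poly}(t)}(x) + K^{\mathrm{poly}(t)}(y \mid x)
  \;\le\;
  K^{t}(x, y) + O\bigl(\log(|x| + |y|)\bigr).
\]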

Cite as

Shuichi Hirahara. Symmetry of Information from Meta-Complexity. In 37th Computational Complexity Conference (CCC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 234, pp. 26:1-26:41, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{hirahara:LIPIcs.CCC.2022.26,
  author =	{Hirahara, Shuichi},
  title =	{{Symmetry of Information from Meta-Complexity}},
  booktitle =	{37th Computational Complexity Conference (CCC 2022)},
  pages =	{26:1--26:41},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-241-9},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{234},
  editor =	{Lovett, Shachar},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2022.26},
  URN =		{urn:nbn:de:0030-drops-165880},
  doi =		{10.4230/LIPIcs.CCC.2022.26},
  annote =	{Keywords: resource-bounded Kolmogorov complexity, average-case complexity, pseudorandomness, hardness of approximation, unconditional lower bound}
}
Document
Average-Case Hardness of NP and PH from Worst-Case Fine-Grained Assumptions

Authors: Lijie Chen, Shuichi Hirahara, and Neekon Vafa

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
What is a minimal worst-case complexity assumption that implies non-trivial average-case hardness of NP or PH? This question is well motivated by the theory of fine-grained average-case complexity and fine-grained cryptography. In this paper, we show that several standard worst-case complexity assumptions are sufficient to imply non-trivial average-case hardness of NP or PH:
- NTIME[n] cannot be solved in quasi-linear time on average if UP ⊈ DTIME[2^{Õ(√n)}].
- Σ₂TIME[n] cannot be solved in quasi-linear time on average if Σ_kSAT cannot be solved in time 2^{Õ(√n)} for some constant k. Previously, it was not known if even average-case hardness of Σ₃SAT implies the average-case hardness of Σ₂TIME[n].
- Under the Exponential-Time Hypothesis (ETH), there is no average-case n^{1+ε}-time algorithm for NTIME[n] whose running time can be estimated in time n^{1+ε} for some constant ε > 0.
Our results are given by generalizing the non-black-box worst-case-to-average-case connections presented by Hirahara (STOC 2021) to the setting of fine-grained complexity. To do so, we construct quite efficient complexity-theoretic pseudorandom generators under the assumption that nondeterministic linear time is easy on average, which may be of independent interest.

Cite as

Lijie Chen, Shuichi Hirahara, and Neekon Vafa. Average-Case Hardness of NP and PH from Worst-Case Fine-Grained Assumptions. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 45:1-45:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ITCS.2022.45,
  author =	{Chen, Lijie and Hirahara, Shuichi and Vafa, Neekon},
  title =	{{Average-Case Hardness of NP and PH from Worst-Case Fine-Grained Assumptions}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{45:1--45:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.45},
  URN =		{urn:nbn:de:0030-drops-156411},
  doi =		{10.4230/LIPIcs.ITCS.2022.45},
  annote =	{Keywords: Average-case complexity, worst-case to average-case reduction}
}
Document
Errorless Versus Error-Prone Average-Case Complexity

Authors: Shuichi Hirahara and Rahul Santhanam

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
We consider the question of whether errorless and error-prone notions of average-case hardness are equivalent, and make several contributions. First, we study this question in the context of hardness for NP, and connect it to the long-standing open question of whether there are instance checkers for NP. We show that there is an efficient non-uniform non-adaptive reduction from errorless to error-prone heuristics for NP if and only if there is an efficient non-uniform average-case non-adaptive instance-checker for NP. We also suggest an approach to proving equivalence of the two notions of average-case hardness for PH. Second, we show unconditionally that error-prone average-case hardness is equivalent to errorless average-case hardness for P against NC¹ and for UP ∩ coUP against P. Third, we apply our results about errorless and error-prone average-case hardness to get new equivalences between hitting set generators and pseudo-random generators.

Cite as

Shuichi Hirahara and Rahul Santhanam. Errorless Versus Error-Prone Average-Case Complexity. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 84:1-84:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.ITCS.2022.84,
  author =	{Hirahara, Shuichi and Santhanam, Rahul},
  title =	{{Errorless Versus Error-Prone Average-Case Complexity}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{84:1--84:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.84},
  URN =		{urn:nbn:de:0030-drops-156803},
  doi =		{10.4230/LIPIcs.ITCS.2022.84},
  annote =	{Keywords: average-case complexity, instance checker, pseudorandomness}
}
Document
Excluding PH Pessiland

Authors: Shuichi Hirahara and Rahul Santhanam

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
Heuristica and Pessiland are "worlds" of average-case complexity [Impagliazzo95] that are considered unlikely but that current techniques are unable to rule out. Recently, [Hirahara20] considered a PH (Polynomial Hierarchy) analogue of Heuristica, and showed that to rule it out, it would be sufficient to prove the NP-completeness of the problem GapMINKT^PH of estimating the PH-oracle time-bounded Kolmogorov complexity of a string. In this work, we analogously define "PH Pessiland" to be a world where PH is hard on average but PH-computable pseudo-random generators do not exist. We unconditionally rule out PH-Pessiland in both non-uniform and uniform settings, by showing that the distributional problem of computing PH-oracle time-bounded Kolmogorov complexity of a string over the uniform distribution is complete for an (error-prone) average-case analogue of PH. Moreover, we show the equivalence between error-prone average-case hardness of PH and the existence of PH-computable pseudorandom generators.

Cite as

Shuichi Hirahara and Rahul Santhanam. Excluding PH Pessiland. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 85:1-85:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.ITCS.2022.85,
  author =	{Hirahara, Shuichi and Santhanam, Rahul},
  title =	{{Excluding PH Pessiland}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{85:1--85:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.85},
  URN =		{urn:nbn:de:0030-drops-156819},
  doi =		{10.4230/LIPIcs.ITCS.2022.85},
  annote =	{Keywords: average-case complexity, pseudorandomness, meta-complexity}
}
Document
Cryptographic Hardness Under Projections for Time-Bounded Kolmogorov Complexity

Authors: Eric Allender, John Gouwar, Shuichi Hirahara, and Caleb Robelle

Published in: LIPIcs, Volume 212, 32nd International Symposium on Algorithms and Computation (ISAAC 2021)


Abstract
A version of time-bounded Kolmogorov complexity, denoted KT, has received attention in the past several years, due to its close connection to circuit complexity and to the Minimum Circuit Size Problem MCSP. Essentially all results about the complexity of MCSP hold also for MKTP (the problem of computing the KT complexity of a string). Both MKTP and MCSP are hard for SZK (Statistical Zero Knowledge) under BPP-Turing reductions; neither is known to be NP-complete. Recently, some hardness results for MKTP were proved that are not (yet) known to hold for MCSP. In particular, MKTP is hard for DET (a subclass of P) under nonuniform ≤^{NC^0}_m reductions. In this paper, we improve this, to show that the complement of MKTP is hard for the (apparently larger) class NISZK_L under not only ≤^{NC^0}_m reductions but even under projections. Also, the complement of MKTP is hard for NISZK under ≤^{P/poly}_m reductions. Here, NISZK is the class of problems with non-interactive zero-knowledge proofs, and NISZK_L is the non-interactive version of the class SZK_L that was studied by Dvir et al. As an application, we provide several improved worst-case to average-case reductions to problems in NP, and we obtain a new lower bound on MKTP (which is currently not known to hold for MCSP).
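For reference, KT complexity is defined roughly as follows (conventions differ slightly across papers; this is an informal restatement, with x_{|x|+1} a designated end-of-string symbol and U a fixed universal machine that is given the description d as an oracle).

\[
  \mathrm{KT}(x) \;=\; \min\bigl\{\, |d| + t \;:\;
    U^{d}(i) \text{ outputs } x_i \text{ within } t \text{ steps, for every } i \le |x| + 1 \,\bigr\}.
\]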

Cite as

Eric Allender, John Gouwar, Shuichi Hirahara, and Caleb Robelle. Cryptographic Hardness Under Projections for Time-Bounded Kolmogorov Complexity. In 32nd International Symposium on Algorithms and Computation (ISAAC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 212, pp. 54:1-54:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{allender_et_al:LIPIcs.ISAAC.2021.54,
  author =	{Allender, Eric and Gouwar, John and Hirahara, Shuichi and Robelle, Caleb},
  title =	{{Cryptographic Hardness Under Projections for Time-Bounded Kolmogorov Complexity}},
  booktitle =	{32nd International Symposium on Algorithms and Computation (ISAAC 2021)},
  pages =	{54:1--54:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-214-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{212},
  editor =	{Ahn, Hee-Kap and Sadakane, Kunihiko},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2021.54},
  URN =		{urn:nbn:de:0030-drops-154875},
  doi =		{10.4230/LIPIcs.ISAAC.2021.54},
  annote =	{Keywords: Kolmogorov Complexity, Interactive Proofs, Minimum Circuit Size Problem, Worst-case to Average-case Reductions}
}
Document
Test of Quantumness with Small-Depth Quantum Circuits

Authors: Shuichi Hirahara and François Le Gall

Published in: LIPIcs, Volume 202, 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)


Abstract
Recently, Brakerski, Christiano, Mahadev, Vazirani and Vidick (FOCS 2018) have shown how to construct a test of quantumness based on the learning with errors (LWE) assumption: a test that can be solved efficiently by a quantum computer but cannot be solved by a classical polynomial-time computer under the LWE assumption. This test has led to several cryptographic applications. In particular, it has been applied to producing certifiable randomness from a single untrusted quantum device, self-testing a single quantum device, and device-independent quantum key distribution. In this paper, we show that this test of quantumness, and essentially all the above applications, can actually be implemented by a very weak class of quantum circuits: constant-depth quantum circuits combined with logarithmic-depth classical computation. This reveals novel complexity-theoretic properties of this fundamental test of quantumness and gives new concrete evidence of the superiority of small-depth quantum circuits over classical computation.

Cite as

Shuichi Hirahara and François Le Gall. Test of Quantumness with Small-Depth Quantum Circuits. In 46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 202, pp. 59:1-59:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.MFCS.2021.59,
  author =	{Hirahara, Shuichi and Le Gall, Fran\c{c}ois},
  title =	{{Test of Quantumness with Small-Depth Quantum Circuits}},
  booktitle =	{46th International Symposium on Mathematical Foundations of Computer Science (MFCS 2021)},
  pages =	{59:1--59:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-201-3},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{202},
  editor =	{Bonchi, Filippo and Puglisi, Simon J.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2021.59},
  URN =		{urn:nbn:de:0030-drops-144996},
  doi =		{10.4230/LIPIcs.MFCS.2021.59},
  annote =	{Keywords: Quantum computing, small-depth circuits, quantum cryptography}
}
Document
Hardness of Constant-Round Communication Complexity

Authors: Shuichi Hirahara, Rahul Ilango, and Bruno Loff

Published in: LIPIcs, Volume 200, 36th Computational Complexity Conference (CCC 2021)


Abstract
How difficult is it to compute the communication complexity of a two-argument total Boolean function f:[N]×[N] → {0,1}, when it is given as an N×N binary matrix? In 2009, Kushilevitz and Weinreb showed that this problem is cryptographically hard, but it is still open whether it is NP-hard. In this work, we show that it is NP-hard to approximate the size (number of leaves) of the smallest constant-round protocol for a two-argument total Boolean function f:[N]×[N] → {0,1}, when it is given as an N×N binary matrix. Along the way to proving this, we show a new deterministic variant of the round elimination lemma, which may be of independent interest.
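To make the quantity being minimized concrete, the following brute-force sketch computes the minimum number of leaves of a deterministic protocol for a tiny matrix. It is an illustration we add here, not code from the paper, and it ignores the round restriction (the paper's hardness result concerns constant-round protocols); each leaf corresponds to a monochromatic rectangle, and the recursion branches on which player speaks and how that player's input set is split.

from functools import lru_cache
from itertools import combinations

def min_leaves(matrix):
    """Minimum number of leaves of a deterministic two-party protocol computing the
    function given as a 0/1 matrix (rows: Alice's inputs, columns: Bob's inputs).
    Exhaustive search; feasible only for very small matrices."""
    n_rows, n_cols = len(matrix), len(matrix[0])

    def proper_splits(elems):
        # All ways to split `elems` into two nonempty parts (the part with elems[0] comes first).
        first, rest = elems[0], elems[1:]
        for r in range(len(rest) + 1):
            for extra in combinations(rest, r):
                left = (first,) + extra
                right = tuple(e for e in rest if e not in extra)
                if right:
                    yield left, right

    @lru_cache(maxsize=None)
    def solve(rows, cols):
        values = {matrix[i][j] for i in rows for j in cols}
        if len(values) == 1:                      # monochromatic rectangle: one leaf suffices
            return 1
        best = float("inf")
        for left, right in proper_splits(rows):   # Alice sends a bit splitting her inputs
            best = min(best, solve(left, cols) + solve(right, cols))
        for left, right in proper_splits(cols):   # Bob sends a bit splitting his inputs
            best = min(best, solve(rows, left) + solve(rows, right))
        return best

    return solve(tuple(range(n_rows)), tuple(range(n_cols)))

if __name__ == "__main__":
    eq = [[1 if i == j else 0 for j in range(4)] for i in range(4)]   # 4x4 equality matrix
    print(min_leaves(eq))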

Cite as

Shuichi Hirahara, Rahul Ilango, and Bruno Loff. Hardness of Constant-Round Communication Complexity. In 36th Computational Complexity Conference (CCC 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 200, pp. 31:1-31:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.CCC.2021.31,
  author =	{Hirahara, Shuichi and Ilango, Rahul and Loff, Bruno},
  title =	{{Hardness of Constant-Round Communication Complexity}},
  booktitle =	{36th Computational Complexity Conference (CCC 2021)},
  pages =	{31:1--31:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-193-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{200},
  editor =	{Kabanets, Valentine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2021.31},
  URN =		{urn:nbn:de:0030-drops-143055},
  doi =		{10.4230/LIPIcs.CCC.2021.31},
  annote =	{Keywords: NP-completeness, Communication Complexity, Round Elimination Lemma, Meta-Complexity}
}
Document
One-Tape Turing Machine and Branching Program Lower Bounds for MCSP

Authors: Mahdi Cheraghchi, Shuichi Hirahara, Dimitrios Myrisiotis, and Yuichi Yoshida

Published in: LIPIcs, Volume 187, 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)


Abstract
For a size parameter s: ℕ → ℕ, the Minimum Circuit Size Problem (denoted by MCSP[s(n)]) is the problem of deciding whether the minimum circuit size of a given function f : {0,1}ⁿ → {0,1} (represented by a string of length N := 2ⁿ) is at most a threshold s(n). A recent line of work exhibited "hardness magnification" phenomena for MCSP: A very weak lower bound for MCSP implies a breakthrough result in complexity theory. For example, McKay, Murray, and Williams (STOC 2019) implicitly showed that, for some constant μ₁ > 0, if MCSP[2^{μ₁⋅n}] cannot be computed by a one-tape Turing machine (with an additional one-way read-only input tape) running in time N^{1.01}, then P ≠ NP.

In this paper, we present the following new lower bounds against one-tape Turing machines and branching programs:
1) A randomized two-sided error one-tape Turing machine (with an additional one-way read-only input tape) cannot compute MCSP[2^{μ₂⋅n}] in time N^{1.99}, for some constant μ₂ > μ₁.
2) A non-deterministic (or parity) branching program of size o(N^{1.5}/log N) cannot compute MKTP, which is a time-bounded Kolmogorov complexity analogue of MCSP. This is shown by directly applying the Nečiporuk method to MKTP, which previously appeared to be difficult.
3) The size of any non-deterministic, co-non-deterministic, or parity branching program computing MCSP is at least N^{1.5-o(1)}.
These results are the first non-trivial lower bounds for MCSP and MKTP against one-tape Turing machines and non-deterministic branching programs, and essentially match the best-known lower bounds for any explicit functions against these computational models. The first result is based on recent constructions of pseudorandom generators for read-once oblivious branching programs (ROBPs) and combinatorial rectangles (Forbes and Kelley, FOCS 2018; Viola 2019). En route, we obtain several related results:
1) There exists a (local) hitting set generator with seed length Õ(√N) secure against read-once polynomial-size non-deterministic branching programs on N-bit inputs.
2) Any read-once co-non-deterministic branching program computing MCSP must have size at least 2^Ω̃(N).
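As a concrete reference point for the problem statement (not for the lower bounds themselves), here is a brute-force decision procedure for MCSP[s]; the AND/OR/NOT basis, the gate-counting convention, and the toy parameters are our own choices for illustration.

from itertools import product

def circuit_size_leq(truth_table, s):
    """Decide whether some circuit over {AND, OR, NOT} with at most s gates
    (AND/OR have fan-in 2, NOT fan-in 1; the last gate is the output) computes
    the n-variable Boolean function given by `truth_table` of length 2^n.
    Pure brute force over straight-line programs; feasible only for tiny n and s."""
    n = (len(truth_table) - 1).bit_length()
    assert len(truth_table) == 1 << n
    assignments = list(product([0, 1], repeat=n))   # row i of the truth table

    def gate_options(num_wires):
        for i in range(num_wires):
            yield ("NOT", i, i)
            for j in range(i, num_wires):           # AND/OR are symmetric
                yield ("AND", i, j)
                yield ("OR", i, j)

    for num_gates in range(s + 1):
        option_lists = [list(gate_options(n + g)) for g in range(num_gates)]
        for gates in product(*option_lists):
            ok = True
            for row, assignment in enumerate(assignments):
                wires = list(assignment)
                for op, i, j in gates:
                    if op == "NOT":
                        wires.append(1 - wires[i])
                    elif op == "AND":
                        wires.append(wires[i] & wires[j])
                    else:
                        wires.append(wires[i] | wires[j])
                if wires[-1] != truth_table[row]:
                    ok = False
                    break
            if ok:
                return True
    return False

if __name__ == "__main__":
    xor2 = [0, 1, 1, 0]                  # truth table of x1 XOR x2
    print(circuit_size_leq(xor2, 3))     # False: no 3-gate AND/OR/NOT circuit matches
    print(circuit_size_leq(xor2, 4))     # True, e.g. (x1 OR x2) AND NOT(x1 AND x2)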

Cite as

Mahdi Cheraghchi, Shuichi Hirahara, Dimitrios Myrisiotis, and Yuichi Yoshida. One-Tape Turing Machine and Branching Program Lower Bounds for MCSP. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 23:1-23:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{cheraghchi_et_al:LIPIcs.STACS.2021.23,
  author =	{Cheraghchi, Mahdi and Hirahara, Shuichi and Myrisiotis, Dimitrios and Yoshida, Yuichi},
  title =	{{One-Tape Turing Machine and Branching Program Lower Bounds for MCSP}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{23:1--23:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.23},
  URN =		{urn:nbn:de:0030-drops-136681},
  doi =		{10.4230/LIPIcs.STACS.2021.23},
  annote =	{Keywords: Minimum Circuit Size Problem, Kolmogorov Complexity, One-Tape Turing Machines, Branching Programs, Lower Bounds, Pseudorandom Generators, Hitting Set Generators}
}
Document
RANDOM
On Nonadaptive Security Reductions of Hitting Set Generators

Authors: Shuichi Hirahara and Osamu Watanabe

Published in: LIPIcs, Volume 176, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)


Abstract
One of the central open questions in the theory of average-case complexity is to establish the equivalence between the worst-case and average-case complexity of the Polynomial-time Hierarchy (PH). One general approach is to show that there exists a PH-computable hitting set generator whose security is based on some NP-hard problem. We present the limits of such an approach, by showing that there exists no exponential-time-computable hitting set generator whose security can be proved by using a nonadaptive randomized polynomial-time reduction from any problem outside AM ∩ coAM, which significantly improves the previous upper bound BPP^NP of Gutfreund and Vadhan (RANDOM/APPROX 2008 [Gutfreund and Vadhan, 2008]). In particular, any security proof of a hitting set generator based on some NP-hard problem must use either an adaptive or non-black-box reduction (unless the polynomial-time hierarchy collapses). To the best of our knowledge, this is the first result that shows limits of black-box reductions from an NP-hard problem to some form of a distributional problem in DistPH. Based on our results, we argue that the recent worst-case to average-case reduction of Hirahara (FOCS 2018 [Hirahara, 2018]) is inherently non-black-box, without relying on any unproven assumptions. On the other hand, combining the non-black-box reduction with our simulation technique of black-box reductions, we exhibit the existence of a "non-black-box selector" for GapMCSP, i.e., an efficient algorithm that solves GapMCSP given as advice two circuits one of which is guaranteed to compute GapMCSP.
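For reference, a hitting set generator G = {G_n : {0,1}^{s(n)} → {0,1}ⁿ} is secure against a class ℭ of algorithms if, roughly (the acceptance threshold 1/2 below is one common convention; the parameters vary across papers),

\[
  \forall A \in \mathfrak{C}:\quad
  \Pr_{x \sim \{0,1\}^{n}}\bigl[A(x) = 1\bigr] \ge \tfrac12
  \;\Longrightarrow\;
  \exists z \in \{0,1\}^{s(n)} \text{ such that } A\bigl(G_n(z)\bigr) = 1.
\]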

Cite as

Shuichi Hirahara and Osamu Watanabe. On Nonadaptive Security Reductions of Hitting Set Generators. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 176, pp. 15:1-15:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.APPROX/RANDOM.2020.15,
  author =	{Hirahara, Shuichi and Watanabe, Osamu},
  title =	{{On Nonadaptive Security Reductions of Hitting Set Generators}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)},
  pages =	{15:1--15:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-164-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{176},
  editor =	{Byrka, Jaros{\l}aw and Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2020.15},
  URN =		{urn:nbn:de:0030-drops-126182},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2020.15},
  annote =	{Keywords: hitting set generator, black-box reduction, average-case complexity}
}
Document
Non-Disjoint Promise Problems from Meta-Computational View of Pseudorandom Generator Constructions

Authors: Shuichi Hirahara

Published in: LIPIcs, Volume 169, 35th Computational Complexity Conference (CCC 2020)


Abstract
The standard notion of promise problem is a pair of disjoint sets of instances, each of which is regarded as Yes and No instances, respectively, and the task of solving a promise problem is to distinguish these two sets of instances. In this paper, we introduce a set of new promise problems which are conjectured to be non-disjoint, and prove that hardness of these "non-disjoint" promise problems gives rise to the existence of hitting set generators (and vice versa). We do this by presenting a general principle which converts any black-box construction of a pseudorandom generator into the existence of a hitting set generator whose security is based on hardness of some "non-disjoint" promise problem (via a non-black-box security reduction).

Applying the principle to cryptographic pseudorandom generators, we introduce
- The Gap(K^SAT vs K) Problem: Given a string x and a parameter s, distinguish whether the polynomial-time-bounded SAT-oracle Kolmogorov complexity of x is at most s, or the polynomial-time-bounded Kolmogorov complexity of x (without SAT oracle) is at least s + O(log|x|).
If Gap(K^SAT vs K) is NP-hard, then the worst-case and average-case complexity of PH is equivalent. Under the plausible assumption that E^NP ≠ E, the promise problem is non-disjoint. These results generalize the non-black-box worst-case to average-case reductions of Hirahara [Hirahara, 2018] and improve the approximation error from Õ(√n) to O(log n).

Applying the principle to complexity-theoretic pseudorandom generators, we introduce a family of Meta-computational Circuit Lower-bound Problems (MCLPs), which are problems of distinguishing the truth tables of explicit functions from hard functions. Our results generalize the hardness versus randomness framework and identify problems whose circuit lower bounds characterize the existence of hitting set generators. For example, we introduce
- The E vs SIZE(2^o(n)) Problem: Given the truth table of a function f, distinguish whether f is computable in exponential time or requires exponential-size circuits to compute.
A nearly-linear AC⁰ ∘ XOR circuit size lower bound for this promise problem is equivalent to the existence of a logarithmic-seed-length hitting set generator for AC⁰ ∘ XOR. Under the plausible assumption that E ⊈ SIZE(2^o(n)), the promise problem is non-disjoint (and thus the minimum circuit size is infinity). This is the first result that provides the exact characterization of the existence of a hitting set generator secure against ℭ by the worst-case lower bound against ℭ for a circuit class ℭ = AC⁰ ∘ XOR ⊂ TC⁰.

In addition, we prove that a nearly-linear size lower bound against co-nondeterministic read-once branching programs for some "non-disjoint" promise problem is sufficient for resolving RL = L. We also establish the equivalence between the existence of a derandomization algorithm for uniform algorithms and a uniform lower bound for a problem of approximating Levin’s Kt-complexity.
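In notation, the first promise problem above can be rendered as follows (our formalization of the statement in the abstract, with K^{SAT,poly} and K^{poly} denoting polynomial-time-bounded Kolmogorov complexity with and without a SAT oracle):

\[
  \Pi_{\mathrm{YES}} = \bigl\{ (x, s) : K^{\mathrm{SAT},\,\mathrm{poly}}(x) \le s \bigr\},
  \qquad
  \Pi_{\mathrm{NO}} = \bigl\{ (x, s) : K^{\mathrm{poly}}(x) \ge s + O(\log |x|) \bigr\}.
\]
% Since the SAT oracle can only shorten descriptions, a string whose complexity drops
% by more than O(log|x|) with the oracle lies in both sets; this is why the promise
% problem is conjectured to be non-disjoint under E^NP ≠ E, as stated in the abstract.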

Cite as

Shuichi Hirahara. Non-Disjoint Promise Problems from Meta-Computational View of Pseudorandom Generator Constructions. In 35th Computational Complexity Conference (CCC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 169, pp. 20:1-20:47, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{hirahara:LIPIcs.CCC.2020.20,
  author =	{Hirahara, Shuichi},
  title =	{{Non-Disjoint Promise Problems from Meta-Computational View of Pseudorandom Generator Constructions}},
  booktitle =	{35th Computational Complexity Conference (CCC 2020)},
  pages =	{20:1--20:47},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-156-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{169},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2020.20},
  URN =		{urn:nbn:de:0030-drops-125720},
  doi =		{10.4230/LIPIcs.CCC.2020.20},
  annote =	{Keywords: meta-complexity, pseudorandom generator, hitting set generator}
}
Document
Unexpected Power of Random Strings

Authors: Shuichi Hirahara

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
There has been a line of work trying to characterize BPP (the class of languages that are solvable by efficient randomized algorithms) by efficient nonadaptive reductions to the set of Kolmogorov-random strings: Buhrman, Fortnow, Koucký, and Loff (CCC 2010 [Buhrman et al., 2010]) showed that every language in BPP is reducible to the set of random strings via a polynomial-time nonadaptive reduction (irrespective of the choice of a universal Turing machine used to define Kolmogorov-random strings). It was conjectured by Allender (CiE 2012 [Allender, 2012]) and others that their lower bound is tight when a reduction works for every universal Turing machine; i.e., "the only way to make use of random strings by a nonadaptive polynomial-time algorithm is to derandomize BPP." In this paper, we refute this conjecture under the plausible assumption that the exponential-time hierarchy does not collapse, by showing that the exponential-time hierarchy EXPH can be solved in exponential time by nonadaptively asking the oracle whether a string is Kolmogorov-random or not. In addition, we provide an exact characterization of S_2^{exp} in terms of exponential-time-computable nonadaptive reductions to arbitrary dense subsets of random strings.
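The oracle referred to is the set of Kolmogorov-random strings, in the usual notation below; the exact threshold and the variant of K used differ across papers, which is why the results are stated so as to hold irrespective of the choice of the universal Turing machine.

\[
  R_{K} \;=\; \bigl\{\, x \in \{0,1\}^{*} \;:\; K(x) \ge |x| \,\bigr\},
  \qquad K(\cdot) = \text{Kolmogorov complexity with respect to a fixed universal machine.}
\]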

Cite as

Shuichi Hirahara. Unexpected Power of Random Strings. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 41:1-41:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{hirahara:LIPIcs.ITCS.2020.41,
  author =	{Hirahara, Shuichi},
  title =	{{Unexpected Power of Random Strings}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{41:1--41:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.41},
  URN =		{urn:nbn:de:0030-drops-117262},
  doi =		{10.4230/LIPIcs.ITCS.2020.41},
  annote =	{Keywords: Kolmogorov-Randomness, Nonadaptive Reduction, BPP, Symmetric Alternation}
}
Document
Beyond Natural Proofs: Hardness Magnification and Locality

Authors: Lijie Chen, Shuichi Hirahara, Igor C. Oliveira, Ján Pich, Ninad Rajgopal, and Rahul Santhanam

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
Hardness magnification reduces major complexity separations (such as EXP ⊈ NC^1) to proving lower bounds for some natural problem Q against weak circuit models. Several recent works [Igor Carboni Oliveira and Rahul Santhanam, 2018; Dylan M. McKay et al., 2019; Lijie Chen and Roei Tell, 2019; Igor Carboni Oliveira et al., 2019; Lijie Chen et al., 2019; Igor Carboni Oliveira, 2019; Lijie Chen et al., 2019] have established results of this form. In the most intriguing cases, the required lower bound is known for problems that appear to be significantly easier than Q, while Q itself is susceptible to lower bounds but these are not yet sufficient for magnification. In this work, we provide more examples of this phenomenon, and investigate the prospects of proving new lower bounds using this approach. In particular, we consider the following essential questions associated with the hardness magnification program:
- Does hardness magnification avoid the natural proofs barrier of Razborov and Rudich [Alexander A. Razborov and Steven Rudich, 1997]?
- Can we adapt known lower bound techniques to establish the desired lower bound for Q?
We establish that some instantiations of hardness magnification overcome the natural proofs barrier in the following sense: slightly superlinear-size circuit lower bounds for certain versions of the minimum circuit size problem MCSP imply the non-existence of natural proofs. As a corollary of our result, we show that certain magnification theorems not only imply strong worst-case circuit lower bounds but also rule out the existence of efficient learning algorithms.

Hardness magnification might sidestep natural proofs, but we identify a source of difficulty when trying to adapt existing lower bound techniques to prove strong lower bounds via magnification. This is captured by a locality barrier: existing magnification theorems unconditionally show that the problems Q considered above admit highly efficient circuits extended with small fan-in oracle gates, while lower bound techniques against weak circuit models quite often easily extend to circuits containing such oracles. This explains why direct adaptations of certain lower bounds are unlikely to yield strong complexity separations via hardness magnification.

Cite as

Lijie Chen, Shuichi Hirahara, Igor C. Oliveira, Ján Pich, Ninad Rajgopal, and Rahul Santhanam. Beyond Natural Proofs: Hardness Magnification and Locality. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 70:1-70:48, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ITCS.2020.70,
  author =	{Chen, Lijie and Hirahara, Shuichi and Oliveira, Igor C. and Pich, J\'{a}n and Rajgopal, Ninad and Santhanam, Rahul},
  title =	{{Beyond Natural Proofs: Hardness Magnification and Locality}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{70:1--70:48},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.70},
  URN =		{urn:nbn:de:0030-drops-117550},
  doi =		{10.4230/LIPIcs.ITCS.2020.70},
  annote =	{Keywords: Hardness Magnification, Natural Proofs, Minimum Circuit Size Problem, Circuit Lower Bounds}
}
Document
NP-hardness of Minimum Circuit Size Problem for OR-AND-MOD Circuits

Authors: Shuichi Hirahara, Igor C. Oliveira, and Rahul Santhanam

Published in: LIPIcs, Volume 102, 33rd Computational Complexity Conference (CCC 2018)


Abstract
The Minimum Circuit Size Problem (MCSP) asks for the size of the smallest boolean circuit that computes a given truth table. It is a prominent problem in NP that is believed to be hard, but for which no proof of NP-hardness has been found. A significant number of works have demonstrated the central role of this problem and its variations in diverse areas such as cryptography, derandomization, proof complexity, learning theory, and circuit lower bounds. The NP-hardness of computing the minimum number of terms in a DNF formula consistent with a given truth table was proved by W. Masek [William J. Masek, 1979] in 1979. In this work, we make the first progress in showing NP-hardness for more expressive classes of circuits, and establish an analogous result for the MCSP problem for depth-3 circuits of the form OR-AND-MOD_2. Our techniques extend to an NP-hardness result for MOD_m gates at the bottom layer under inputs from (Z / m Z)^n.
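Schematically, a depth-3 OR-AND-MOD_2 circuit computes a function of the following form, where each bottom gate tests a parity of input bits against a constant; this is our rendering of the circuit class named above, and the MOD_m variant replaces the parity tests by modular tests over inputs from (Z/mZ)^n.

\[
  f(x) \;=\; \bigvee_{i=1}^{t} \; \bigwedge_{j=1}^{w_i}
  \Bigl[\, \bigoplus_{k \in S_{i,j}} x_k = b_{i,j} \,\Bigr],
  \qquad S_{i,j} \subseteq [n], \; b_{i,j} \in \{0,1\}.
\]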

Cite as

Shuichi Hirahara, Igor C. Oliveira, and Rahul Santhanam. NP-hardness of Minimum Circuit Size Problem for OR-AND-MOD Circuits. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 5:1-5:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{hirahara_et_al:LIPIcs.CCC.2018.5,
  author =	{Hirahara, Shuichi and Oliveira, Igor C. and Santhanam, Rahul},
  title =	{{NP-hardness of Minimum Circuit Size Problem for OR-AND-MOD Circuits}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{5:1--5:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.5},
  URN =		{urn:nbn:de:0030-drops-88831},
  doi =		{10.4230/LIPIcs.CCC.2018.5},
  annote =	{Keywords: NP-hardness, Minimum Circuit Size Problem, depth-3 circuits}
}
Document
New Insights on the (Non-)Hardness of Circuit Minimization and Related Problems

Authors: Eric Allender and Shuichi Hirahara

Published in: LIPIcs, Volume 83, 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)


Abstract
The Minimum Circuit Size Problem (MCSP) and a related problem (MKTP) that deals with time-bounded Kolmogorov complexity are prominent candidates for NP-intermediate status. We show that, under very modest cryptographic assumptions (such as the existence of one-way functions), the problem of approximating the minimum circuit size (or time-bounded Kolmogorov complexity) within a factor of n^{1 - o(1)} is indeed NP-intermediate. To the best of our knowledge, these problems are the first natural NP-intermediate problems under the existence of an arbitrary one-way function.

We also prove that MKTP is hard for the complexity class DET under non-uniform NC^0 reductions. This is surprising, since prior work on MCSP and MKTP had highlighted weaknesses of "local" reductions such as NC^0 reductions. We exploit this local reduction to obtain several new consequences:
* MKTP is not in AC^0[p].
* Circuit size lower bounds are equivalent to hardness of a relativized version MKTP^A of MKTP under a class of uniform AC^0 reductions, for a large class of sets A.
* Hardness of MCSP^A implies hardness of MKTP^A for a wide class of sets A. This is the first result directly relating the complexity of MCSP^A and MKTP^A, for any A.

Cite as

Eric Allender and Shuichi Hirahara. New Insights on the (Non-)Hardness of Circuit Minimization and Related Problems. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 83, pp. 54:1-54:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{allender_et_al:LIPIcs.MFCS.2017.54,
  author =	{Allender, Eric and Hirahara, Shuichi},
  title =	{{New Insights on the (Non-)Hardness of Circuit Minimization and Related Problems}},
  booktitle =	{42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017)},
  pages =	{54:1--54:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-046-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{83},
  editor =	{Larsen, Kim G. and Bodlaender, Hans L. and Raskin, Jean-Francois},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2017.54},
  URN =		{urn:nbn:de:0030-drops-80636},
  doi =		{10.4230/LIPIcs.MFCS.2017.54},
  annote =	{Keywords: computational complexity, Kolmogorov complexity, circuit size}
}
Document
On the Average-Case Complexity of MCSP and Its Variants

Authors: Shuichi Hirahara and Rahul Santhanam

Published in: LIPIcs, Volume 79, 32nd Computational Complexity Conference (CCC 2017)


Abstract
We prove various results on the complexity of MCSP (Minimum Circuit Size Problem) and the related MKTP (Minimum Kolmogorov Time-Bounded Complexity Problem): * We observe that under standard cryptographic assumptions, MCSP has a pseudorandom self-reduction. This is a new notion we define by relaxing the notion of a random self-reduction to allow queries to be pseudorandom rather than uniformly random. As a consequence, we derive a weak form of a worst-case to average-case reduction for (a promise version of) MCSP. Our result also distinguishes MCSP from natural NP-complete problems, which are not known to have worst-case to average-case reductions. Indeed, it is known that strong forms of worst-case to average-case reductions for NP-complete problems collapse the Polynomial Hierarchy. * We prove the first non-trivial formula size lower bounds for MCSP by showing that MCSP requires nearly quadratic-size De Morgan formulas. * We show average-case superpolynomial size lower bounds for MKTP against AC^0[p] for any prime p. * We show the hardness of MKTP on average under assumptions that have been used in much recent work, such as Feige's assumptions, Alekhnovich's assumption, and the Planted Clique conjecture. In addition, MCSP is hard under Alekhnovich's assumption. Using a version of Feige's assumption against co-nondeterministic algorithms that has been conjectured recently, we provide evidence for the first time that MKTP is not in coNP. Our results suggest that it might be worthwhile to focus on the average-case hardness of MKTP and MCSP when approaching the question of whether these problems are NP-hard.
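
As a rough illustration of the "pseudorandom self-reduction" notion described above (a sketch only; the number of queries, the success probability, and the exact indistinguishability requirement are as fixed in the paper, and the 2/3 threshold below is an assumption made here for concreteness):

\[
  \text{Random self-reduction for } L:\quad
  \Pr\bigl[R^{L}(x) = L(x)\bigr] \ge \tfrac{2}{3},
  \quad\text{and each query of } R \text{ on input } x \text{ is uniform over } \{0,1\}^{m(|x|)}.
\]
\[
  \text{Pseudorandom self-reduction: as above, except each query need only be computationally indistinguishable from uniform.}
\]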

Cite as

Shuichi Hirahara and Rahul Santhanam. On the Average-Case Complexity of MCSP and Its Variants. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 7:1-7:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{hirahara_et_al:LIPIcs.CCC.2017.7,
  author =	{Hirahara, Shuichi and Santhanam, Rahul},
  title =	{{On the Average-Case Complexity of MCSP and Its Variants}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{7:1--7:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.7},
  URN =		{urn:nbn:de:0030-drops-75406},
  doi =		{10.4230/LIPIcs.CCC.2017.7},
  annote =	{Keywords: minimum circuit size problem, average-case complexity, circuit lower bounds, time-bounded Kolmogorov complexity, hardness}
}
Document
Limits of Minimum Circuit Size Problem as Oracle

Authors: Shuichi Hirahara and Osamu Watanabe

Published in: LIPIcs, Volume 50, 31st Conference on Computational Complexity (CCC 2016)


Abstract
The Minimum Circuit Size Problem (MCSP) is known to be hard for statistical zero knowledge via a BPP-Turing reduction (Allender and Das, 2014), whereas establishing NP-hardness of MCSP via a polynomial-time many-one reduction is difficult (Murray and Williams, 2015) in the sense that it implies ZPP ≠ EXP, which is a major open problem in computational complexity. In this paper, we provide strong evidence that current techniques cannot establish NP-hardness of MCSP, even under polynomial-time Turing reductions or randomized reductions. Specifically, we introduce the notion of oracle-independent reduction to MCSP, which captures all the currently known reductions. We say that a reduction to MCSP is oracle-independent if the reduction can be generalized to a reduction to MCSP^A for any oracle A, where MCSP^A denotes an oracle version of MCSP. We prove that no language outside P is reducible to MCSP via an oracle-independent polynomial-time Turing reduction. We also show that the class of languages reducible to MCSP via an oracle-independent randomized reduction that makes at most one query is contained in AM ∩ coAM. Thus, NP-hardness of MCSP cannot be established via such oracle-independent reductions unless the polynomial hierarchy collapses. We also extend the previous results to the case of more general reductions: we prove that establishing NP-hardness of MCSP via a polynomial-time nonadaptive reduction implies ZPP ≠ EXP, and that establishing NP-hardness of approximating circuit complexity via a polynomial-time Turing reduction also implies ZPP ≠ EXP. Along the way, we prove that approximating Levin's Kolmogorov complexity is provably not EXP-hard under polynomial-time Turing reductions, which is of independent interest.
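
As a minimal formalization of the "oracle-independent reduction" notion above (a sketch for the deterministic Turing-reduction case, under the reading that a single reduction machine must work relative to every oracle; the randomized case is analogous):

\[
  R \text{ is an oracle-independent reduction from } L \text{ to } \mathrm{MCSP}
  \iff
  \forall A \;\; \forall x:\ R^{\mathrm{MCSP}^{A}}(x) = L(x),
\]

where R is a single polynomial-time oracle machine and MCSP^A is the oracle version of MCSP.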

Cite as

Shuichi Hirahara and Osamu Watanabe. Limits of Minimum Circuit Size Problem as Oracle. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 18:1-18:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{hirahara_et_al:LIPIcs.CCC.2016.18,
  author =	{Hirahara, Shuichi and Watanabe, Osamu},
  title =	{{Limits of Minimum Circuit Size Problem as Oracle}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{18:1--18:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.18},
  URN =		{urn:nbn:de:0030-drops-58426},
  doi =		{10.4230/LIPIcs.CCC.2016.18},
  annote =	{Keywords: minimum circuit size problem, NP-completeness, randomized reductions, resource-bounded Kolmogorov complexity, Turing reductions}
}
Document
Identifying an Honest EXP^NP Oracle Among Many

Authors: Shuichi Hirahara

Published in: LIPIcs, Volume 33, 30th Conference on Computational Complexity (CCC 2015)


Abstract
We provide a general framework to remove short advice by formulating the following computational task for a function f: given two oracles, at least one of which is honest (i.e., correctly computes f on all inputs), as well as an input, the task is to compute f on the input with the help of the oracles by a probabilistic polynomial-time machine, which we shall call a selector. We characterize the languages for which short advice can be removed by the notion of selector: a paddable language has a selector if and only if short advice of a probabilistic machine that accepts the language can be removed under any relativized world. Previously, instance checkers have served as a useful tool to remove short advice of probabilistic computation. We indicate that the existence of instance checkers is a property stronger than that of removing short advice: although no instance checker for EXP^NP-complete languages exists unless EXP^NP = NEXP, we prove that there exists a selector for any EXP^NP-complete language, by building on the proof of MIP = NEXP by Babai, Fortnow, and Lund (1991).
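
As a minimal sketch of the selector task defined above (the success threshold 2/3 is an assumption made here for concreteness; the paper fixes its own parameters):

\[
  S \text{ is a selector for } f
  \iff
  \forall O_0, O_1:\
  \Bigl(\exists i \in \{0,1\}:\ O_i \equiv f\Bigr)
  \implies
  \forall x:\ \Pr\Bigl[S^{O_0, O_1}(x) = f(x)\Bigr] \ge \tfrac{2}{3},
\]

where S is a probabilistic polynomial-time machine with access to both oracles.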

Cite as

Shuichi Hirahara. Identifying an Honest EXP^NP Oracle Among Many. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 244-263, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{hirahara:LIPIcs.CCC.2015.244,
  author =	{Hirahara, Shuichi},
  title =	{{Identifying an Honest EXP^NP Oracle Among Many}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{244--263},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.244},
  URN =		{urn:nbn:de:0030-drops-50718},
  doi =		{10.4230/LIPIcs.CCC.2015.244},
  annote =	{Keywords: nonuniform complexity, short advice, instance checker, interactive proof systems, probabilistic checkable proofs}
}